1.
Sci Rep ; 14(1): 5459, 2024 03 05.
Article in English | MEDLINE | ID: mdl-38443378

ABSTRACT

Roboticists often imbue robots with human-like physical features to increase the likelihood that they are afforded benefits known to be associated with anthropomorphism. Similarly, deepfakes often employ computer-generated human faces to attempt to create convincing simulacra of actual humans. In the present work, we investigate whether perceivers' higher-order beliefs about faces (i.e., whether they represent actual people or android robots) modulate the extent to which perceivers deploy face-typical processing for social stimuli. Past work has shown that perceivers' recognition performance is more impacted by the inversion of faces than of objects, thus highlighting that faces are processed holistically (i.e., as a Gestalt), whereas objects engage feature-based processing. Here, we use an inversion task to examine whether face-typical processing is attenuated when actual human faces are labeled as non-human (i.e., android robot). This allows us to employ a task shown to be differentially sensitive to social (i.e., faces) and non-social (i.e., objects) stimuli while also randomly assigning face stimuli to seem real or fake. The results show smaller inversion effects when face stimuli were believed to represent android robots than when they were believed to represent humans. This suggests that robots strongly resembling humans may still fail to be perceived as "social" due to pre-existing beliefs about their mechanistic nature. Theoretical and practical implications of this research are discussed.
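To make the logic of the inversion task concrete, the sketch below shows how an inversion effect could be computed from trial-level recognition accuracy. This is an illustration only, not the authors' analysis code; the column names, data layout, and numbers are assumptions.

```python
# Minimal sketch of the inversion-effect logic (not the authors' code).
# Assumes a trial-level table with columns: belief ("human"/"android"),
# orientation ("upright"/"inverted"), and correct (0/1).
import pandas as pd

def inversion_effect(df: pd.DataFrame) -> pd.Series:
    """Upright minus inverted recognition accuracy, per belief condition.

    A larger value indicates more holistic (face-typical) processing.
    """
    acc = df.groupby(["belief", "orientation"])["correct"].mean().unstack()
    return acc["upright"] - acc["inverted"]

# Example with made-up numbers mirroring the reported pattern:
trials = pd.DataFrame({
    "belief":      ["human"] * 4 + ["android"] * 4,
    "orientation": ["upright", "upright", "inverted", "inverted"] * 2,
    "correct":     [1, 1, 0, 1,   1, 1, 1, 1],
})
print(inversion_effect(trials))  # smaller effect expected for "android"
```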


Subject(s)
Facial Recognition , Robotics , Humans , Social Perception , Chromosome Inversion , Physical Examination
2.
Sci Rep ; 13(1): 16708, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37794045

ABSTRACT

When interacting with groups of robots, we tend to perceive them as a homogeneous group in which all members have similar capabilities. This overgeneralization of capabilities is potentially due to a lack of perceptual experience with robots or a lack of motivation to see them as individuals (i.e., individuation), and it can undermine trust and performance in human-robot teams. One way to overcome this issue is by designing robots that can be individuated, such that each team member can be assigned tasks based on its actual skills. In two experiments, we examine whether humans can effectively individuate robots: Experiment 1 (n = 225) investigates how individuation performance for robot stimuli compares to that for human stimuli belonging to either a social ingroup or outgroup. Experiment 2 (n = 177) examines to what extent robots' physical human-likeness (high versus low) affects individuation performance. Results show that although humans are able to individuate robots, they seem to individuate them to a lesser extent than both ingroup and outgroup human stimuli (Experiment 1). Furthermore, robots that are physically more humanlike are initially individuated better than robots that are physically less humanlike; this effect, however, diminishes over the course of the experiment, suggesting that the individuation of robots can be learned quite quickly (Experiment 2). Whether differences in individuation performance with robot versus human stimuli are primarily due to reduced perceptual experience with robot stimuli or to motivational factors (i.e., robots as a potential social outgroup) should be examined in future studies.


Subject(s)
Facial Recognition , Robotics , Humans , Learning , Motivation , Trust
3.
Behav Brain Sci ; 46: e38, 2023 04 05.
Article in English | MEDLINE | ID: mdl-37017057

ABSTRACT

While we applaud the careful breakdown by Clark and Fischer of the representation of social robots held by the human user, we emphasise that a neurocognitive perspective is crucial to fully capture how people perceive and construe social robots at the behavioural and brain levels.


Subject(s)
Robotics , Humans , Social Interaction , Brain
4.
BMC Health Serv Res ; 22(1): 1529, 2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36522664

ABSTRACT

BACKGROUND: Diabetes mellitus, cardiovascular diseases, chronic kidney disease, and thyroid diseases are chronic diseases that require regular monitoring through blood tests. This paper first investigates the experiences of chronic care patients with venipuncture and their expectations of an at-home blood-sampling device, and then assesses the impact on societal costs of implementing such a device in current practice. METHODS: An online survey was distributed among chronic care patients to gain insight into their experience of blood sampling in current practice and their expectations of an at-home blood-sampling device. The survey results were used as input parameters in a patient-level Monte Carlo analysis developed to represent a hypothetical cohort of Dutch chronically ill patients and to investigate the impact on societal costs compared to usual care. RESULTS: In total, 1311 patients participated in the survey, of whom 31% experienced the time spent on the phlebotomy appointment as a burden. Of all respondents, 71% would prefer to use an at-home blood-sampling device to monitor their chronic disease. The cost analysis indicated that implementing an at-home blood-sampling device increases the cost of phlebotomy itself by €27.25 per patient per year but reduces overall societal costs by €24.86 per patient per year, mainly by limiting productivity loss. CONCLUSIONS: Patients consider an at-home blood-sampling device to be more user-friendly than venous phlebotomy on location. Long waiting times and crowded locations can be avoided by using an at-home blood-sampling device. Implementing such a device is likely cost-saving, as it is expected to reduce societal costs.
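A patient-level Monte Carlo cost model of the kind described can be sketched as follows. Everything here is an invented illustration: the cost parameters, distributions, and patient characteristics are assumptions, not the study's inputs; only the overall logic (simulate individual patients, compare societal cost per year under both sampling strategies) follows the abstract.

```python
# Illustrative patient-level Monte Carlo cost comparison (all parameters
# are invented assumptions; only the overall logic follows the abstract).
import random

random.seed(1)
N_PATIENTS = 10_000

def societal_cost(at_home: bool) -> float:
    """Simulated societal cost per patient per year, in euros."""
    draws_per_year = random.choice([2, 4, 6])               # assumed test frequency
    sampling = (12.0 if at_home else 5.0) * draws_per_year  # assumed unit costs
    if at_home:
        return sampling  # no travel, no time off work
    travel = random.uniform(2.0, 8.0) * draws_per_year
    working = random.random() < 0.6  # assumed share of patients in paid work
    # Hours lost per visit, valued at an assumed gross hourly rate.
    productivity = random.uniform(0.5, 2.0) * 35.0 * draws_per_year if working else 0.0
    return sampling + travel + productivity

usual = sum(societal_cost(False) for _ in range(N_PATIENTS)) / N_PATIENTS
home = sum(societal_cost(True) for _ in range(N_PATIENTS)) / N_PATIENTS
print(f"mean societal cost/patient/year: usual care €{usual:.2f}, at-home €{home:.2f}")
```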


Subject(s)
Patient Preference , Phlebotomy , Humans , Cost-Benefit Analysis , Blood Specimen Collection , Long-Term Care , Health Care Costs
5.
Hum Factors ; 64(3): 499-513, 2022 05.
Article in English | MEDLINE | ID: mdl-32955351

ABSTRACT

OBJECTIVE: Human problem solvers possess the ability to outsource parts of their mental processing onto cognitive "helpers" (cognitive offloading). However, suboptimal decisions regarding which helper to recruit for which task occur frequently. Here, we investigate whether understanding and adjusting a specific subcomponent of mental models, namely beliefs about task-specific expertise, regarding these helpers could provide a comparatively easy way to improve offloading decisions. BACKGROUND: Mental models afford the storage of beliefs about a helper that can be retrieved when needed. METHODS: Arithmetic and social problems were solved by 192 participants. Participants could, in addition to solving a task on their own, offload cognitive processing onto a human, a robot, or one of two smartphone apps. These helpers were introduced with either task-specific (e.g., stating that an app would use machine learning to "recognize faces" and "read emotions") or task-unspecific (e.g., stating that an app was built for solving "complex cognitive tasks") descriptions of their expertise. RESULTS: Providing task-specific expertise information heavily altered offloading behavior for apps but much less so for humans or robots. This suggests (1) strong preexisting mental models of human and robot helpers and (2) a strong impact of mental model adjustment for novel helpers like unfamiliar smartphone apps. CONCLUSION: Creating and refining mental models is an easy approach to adjusting offloading preferences and thus improving interactions with cognitive environments. APPLICATION: To work efficiently in environments in which problem-solving includes consulting other people or cognitive tools ("helpers"), accurate mental models, especially regarding task-relevant expertise, are a crucial prerequisite.


Subject(s)
Mobile Applications , Models, Psychological , Cognition , Emotions , Humans , Problem Solving
6.
Front Neurogenom ; 3: 959578, 2022.
Article in English | MEDLINE | ID: mdl-38235446

ABSTRACT

Robot faces often differ from human faces in terms of their facial features (e.g., lack of eyebrows) and the spatial relationships between these features (e.g., disproportionately large eyes), which can influence the degree to which social brain areas [i.e., the Fusiform Face Area (FFA) and Superior Temporal Sulcus (STS); Haxby et al., 2000] process them as social individuals that can be discriminated from other agents in terms of their perceptual features and person attributes. Of interest in this work is whether robot stimuli are processed in a less social manner than human stimuli. If true, this could undermine human-robot interactions (HRIs), because human partners could fail to perceive robots as individual agents with unique features and capabilities (a phenomenon known as outgroup homogeneity), potentially leading to miscalibration of trust and errors in the allocation of task responsibilities. In this experiment, we use the face inversion paradigm (as a proxy for neural activation in social brain areas) to examine whether face processing differs between human and robot face stimuli: if robot faces are perceived as less face-like than human faces, the difference in recognition performance for faces presented upright compared to upside down (i.e., the inversion effect) should be less pronounced for robot faces than for human faces. The results demonstrate a reduced face inversion effect for robot versus human faces, supporting the hypothesis that robot faces are processed in a less face-like manner. This suggests that roboticists should attend carefully to the design of robot faces and evaluate them based on their ability to engage face-typical processes. Specific design recommendations on how to accomplish this goal are provided in the discussion.

7.
Front Psychol ; 12: 604977, 2021.
Article in English | MEDLINE | ID: mdl-34737716

ABSTRACT

With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct that has sparked a multitude of measures and approaches to study and understand it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work serves as a reference guide for researchers, listing available TiA measurement methods along with the model-derived constructs they capture, including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.

8.
J Cogn ; 4(1): 28, 2021 May 31.
Article in English | MEDLINE | ID: mdl-34131624

ABSTRACT

Social agents rely on the ability to use feedback to learn and modify their behavior. The extent to which this happens in social contexts depends on motivational, cognitive and/or affective parameters. For instance, feedback-associated learning occurs at different rates when the outcome of an action (e.g., winning or losing in a gambling task) affects oneself ("Self") versus another human ("Other"). Here, we examine whether similar context effects on feedback-associated learning can also be observed when the "other" is a social robot (here: Cozmo). We additionally examine whether a "hybrid" version of the gambling paradigm, where participants are free to engage in a dynamic interaction with a robot, then move to a controlled screen-based experiment can be used to examine social cognition in human-robot interaction. This hybrid method is an alternative to current designs where researchers examine the effect of the interaction on social cognition during the interaction with the robot. For that purpose, three groups of participants (n total = 60) interacted with Cozmo over different time periods (no interaction vs. a single 20 minute interaction in the lab vs. daily 20 minute interactions over five consecutive days at home) before performing the gambling task in the lab. The results indicate that prior interactions impact the degree to which participants benefit from feedback during the gambling task, with overall worse learning immediately after short-term interactions with the robot and better learning in the "Self" versus "Other" condition after repeated interactions with the robot. These results indicate that "hybrid" paradigms are a suitable option to investigate social cognition in human-robot interaction when a fully dynamic implementation (i.e., interaction and measurement dynamic) is not feasible.
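The idea of feedback-associated learning proceeding at different rates for "Self" versus "Other" outcomes can be illustrated with a simple delta-rule (Rescorla-Wagner-style) value update. This is our toy illustration, not the model used in the paper; the learning rates, trial count, and reward probability are invented.

```python
# Toy delta-rule illustration of feedback learning at different rates for
# "Self" vs. "Other" outcomes (invented parameters, not the paper's model).
import random

def final_value_estimate(alpha: float, n_trials: int = 30, p_win: float = 0.7) -> float:
    """Learn the value of the better gamble via prediction-error updates."""
    random.seed(0)  # same feedback sequence for both conditions
    v = 0.5  # initial value estimate
    for _ in range(n_trials):
        reward = 1.0 if random.random() < p_win else 0.0
        v += alpha * (reward - v)  # delta-rule update
    return v

# A higher learning rate when outcomes affect oneself ("Self") than when they
# affect the robot ("Other") moves the estimate toward p_win = 0.7 faster.
print(f"Self  (alpha = 0.20): v = {final_value_estimate(0.20):.2f}")
print(f"Other (alpha = 0.05): v = {final_value_estimate(0.05):.2f}")
```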

10.
Cogn Affect Behav Neurosci ; 21(4): 763-775, 2021 08.
Article in English | MEDLINE | ID: mdl-33821460

ABSTRACT

Social species rely on the ability to modulate feedback-monitoring in social contexts to adjust one's actions and obtain desired outcomes. When positive outcomes are awarded during a gambling task, feedback-monitoring is attenuated when strangers are rewarded, as less value is assigned to the awarded outcome. This difference in feedback-monitoring can be indexed by an event-related potential (ERP) component known as the Reward Positivity (RewP), whose amplitude is enhanced when receiving positive feedback. While the degree of familiarity influences the RewP, little is known about how the RewP and reinforcement learning are affected when gambling on behalf of familiar versus nonfamiliar agents, such as robots. This question becomes increasingly important given that robots may be used as teachers and/or social companions in the near future, with whom children and adults will interact for short or long periods of time. In the present study, we examined whether feedback-monitoring when gambling on behalf of oneself compared with a robot is impacted by whether participants have familiarized themselves with the robot before the task. We expected enhanced RewP amplitude for self versus other for those who did not familiarize themselves with the robot, and that self-other differences in the RewP would be attenuated for those who did. Instead, we observed that the RewP was larger when familiarization with the robot occurred, which corresponded to overall worse learning outcomes. We additionally observed an enhanced P3 effect in the high-familiarity condition, which suggests an increased motivation to reward. These findings suggest that familiarization with robots may have a positive motivational effect that enhances RewP amplitudes but interferes with learning.


Subject(s)
Robotics , Adult , Child , Electroencephalography , Evoked Potentials , Feedback , Humans , Reward , Social Interaction
11.
Front Neuroergon ; 2: 654597, 2021.
Article in English | MEDLINE | ID: mdl-38235251
12.
Front Psychol ; 11: 2234, 2020.
Article in English | MEDLINE | ID: mdl-33013584

ABSTRACT

Understanding and reacting to others' nonverbal social signals, such as changes in gaze direction (i.e., gaze cues), is essential for social interactions and underpins processes such as joint attention and mentalizing. Although attentional orienting in response to gaze cues has a strong reflexive component, accumulating evidence shows that it can be top-down controlled by context information regarding the signals' social relevance. For example, when a gazer is believed to be an entity "with a mind" (i.e., mind perception), people exert more top-down control on attention orienting. Although increasing an agent's physical human-likeness can enhance mind perception, it could have negative consequences for top-down control of social attention when a gazer's physical appearance is categorically ambiguous (i.e., difficult to categorize as human or nonhuman), as resolving this ambiguity would require cognitive resources that could otherwise be used for top-down control of attention orienting. To examine this question, we used mouse-tracking to explore whether categorically ambiguous agents are associated with increased processing costs (Experiment 1), whether categorically ambiguous stimuli negatively impact top-down control of social attention (Experiment 2), and whether resolving the conflict related to the agent's categorical ambiguity (using exposure) would restore top-down control of attention orienting (Experiment 3). The findings suggest that categorically ambiguous stimuli are associated with cognitive conflict, which negatively impacts the ability to exert top-down control over attentional orienting in a counterpredictive gaze-cueing paradigm; this negative impact, however, is attenuated when participants are pre-exposed to the stimuli prior to the gaze-cueing task. Taken together, these findings suggest that manipulating physical human-likeness is a powerful way to affect mind perception in human-robot interaction (HRI) but has diminishing returns for social attention when the agent is categorically ambiguous, due to the drainage of cognitive resources and impairment of top-down control.
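In a counterpredictive design, the target usually appears opposite the gazed-at location, so successful top-down control shows up as faster responses there. The sketch below illustrates that computation; the data layout, column names, and numbers are assumptions, not the study's materials.

```python
# Sketch of a counterpredictive gaze-cueing effect (assumed data layout).
# In this design the target usually appears OPPOSITE the gazed-at location,
# so top-down control predicts faster RTs on "opposite" (predicted) trials.
import pandas as pd

def cueing_effect(df: pd.DataFrame) -> float:
    """Mean RT on gazed-at trials minus mean RT on opposite trials (ms).

    Positive values indicate successful top-down (counterpredictive) control.
    """
    rt = df.groupby("target_location")["rt_ms"].mean()
    return rt["gazed_at"] - rt["opposite"]

trials = pd.DataFrame({
    "target_location": ["gazed_at", "opposite", "gazed_at", "opposite"],
    "rt_ms":           [430.0, 395.0, 442.0, 401.0],
})
print(f"cueing effect: {cueing_effect(trials):+.1f} ms")
```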

13.
Front Robot AI ; 7: 531805, 2020.
Article in English | MEDLINE | ID: mdl-33501306

ABSTRACT

The development of AI that can socially engage with humans is exciting to imagine, but such advanced algorithms might prove harmful if people are no longer able to detect when they are interacting with non-humans in online environments. Because we cannot fully predict how socially intelligent AI will be applied, it is important to conduct research into how sensitive humans are to the behaviors of humans compared to those produced by AI. This paper presents results from a behavioral Turing Test, in which participants interacted with a human, a simple AI, or a "social" AI within a complex videogame environment. Participants (66 total) played an open-world, interactive videogame with one of these co-players and were instructed that they could interact non-verbally however they desired for 30 min. Afterwards, they indicated their beliefs about the agent via three Likert measures (how much they trusted and liked the co-player, and the extent to which they perceived them as a "real person") and an interview about their overall perception and the cues they used to determine humanness. T-tests, Analysis of Variance, and Tukey's HSD were used to analyze quantitative data, and Cohen's Kappa and χ² were used to analyze interview data. Our results suggest that it was difficult for participants to distinguish between humans and the social AI on the basis of behavior. An analysis of in-game behaviors, survey data, and qualitative responses suggests that participants associated engagement in social interactions with humanness within the game.
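For readers unfamiliar with these tests, the sketch below shows one way the named analyses might be run in Python with scipy, statsmodels, and scikit-learn. The data are invented placeholders; this is not the authors' analysis code.

```python
# Sketch of the named analyses on invented placeholder data
# (scipy/statsmodels/sklearn; not the authors' analysis scripts).
import numpy as np
from scipy.stats import ttest_ind, f_oneway, chi2_contingency
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# "Real person" ratings per co-player condition (invented values).
human     = rng.normal(5.0, 1.0, 22)
social_ai = rng.normal(4.8, 1.0, 22)
simple_ai = rng.normal(3.9, 1.0, 22)

# Pairwise t-test, one-way ANOVA across conditions, then Tukey's HSD.
print(ttest_ind(human, social_ai))
print(f_oneway(human, social_ai, simple_ai))
scores = np.concatenate([human, social_ai, simple_ai])
groups = ["human"] * 22 + ["social"] * 22 + ["simple"] * 22
print(pairwise_tukeyhsd(scores, groups))

# Inter-rater agreement on interview codes (Cohen's kappa) and a chi-square
# test of "judged human" counts by condition.
coder_a = [1, 0, 1, 1, 0, 1, 0, 0]
coder_b = [1, 0, 1, 0, 0, 1, 0, 1]
print("kappa:", cohen_kappa_score(coder_a, coder_b))
print(chi2_contingency([[15, 7], [13, 9], [6, 16]]))
```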

14.
J Exp Psychol Appl ; 26(3): 465-479, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31829653

ABSTRACT

Humans frequently use external (environment-based) strategies to supplement their internal (brain-based) thought. In the memory domain, whether to solve a problem using external or internal retrieval depends on the accessibility of external information, the judgment of one's mnemonic ability, and the problem's visual features. It likely also depends on the accessibility of internal information. Here, we asked whether internal accessibility contributes to strategy choice even when visual features bear no information on internal accessibility. Specifically, 114 participants validated alphanumeric equations (e.g., A + 2 = C) whose visual appearance (addends 2, 3, or 4) signified different difficulty levels. First, some equations were presented more frequently than others, allowing participants to establish efficient internal access to the correct solution via memory retrieval rather than counting up the alphabet. Second, participants viewed the equations again but could access the correct solution externally using a computer mouse. We hypothesized that external strategy use should selectively decrease for frequently learned equations, irrespective of the task's visual features. Results mostly confirm our hypothesis. Exploratory analyses further suggest that participants partially used a sequential "try-internal-retrieval-first" mechanism to establish the adaptive behavior. Implications for intervention methods aimed at improving interactive cognition are discussed. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
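The validation task itself is easy to make concrete: an equation such as A + 2 = C is true exactly when counting the addend's number of letters up from the first letter lands on the second. A minimal sketch (our illustration, not the study's stimulus code):

```python
# Validating alphanumeric equations like "A + 2 = C" by counting up the
# alphabet -- the internal strategy participants could replace with
# memory retrieval for frequently repeated equations.
import string

def is_valid(letter: str, addend: int, result: str) -> bool:
    """True if counting `addend` letters up from `letter` yields `result`."""
    return string.ascii_uppercase.index(letter) + addend \
        == string.ascii_uppercase.index(result)

print(is_valid("A", 2, "C"))  # True:  A -> B -> C
print(is_valid("D", 3, "F"))  # False: D + 3 = G
```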


Subject(s)
Brain/physiology , Cognition , Memory/physiology , Problem Solving , Technology , Adult , Female , Humans , Judgment , Learning , Male , Young Adult
15.
Cogn Sci ; 43(12): e12802, 2019 12.
Article in English | MEDLINE | ID: mdl-31858630

ABSTRACT

When incorporating the environment into mental processing (cf. cognitive offloading), one creates novel cognitive strategies that have the potential to improve task performance. Improved performance can, for example, mean faster problem solving, more accurate solutions, or even higher grades at university. Although cognitive offloading has frequently been associated with improved performance, it is as yet unclear how flexible problem solvers are at matching their offloading habits with their current performance goals (can people improve goal-related instead of generic performance, e.g., when being in a hurry and aiming for a "quick and dirty" solution?). Here, we asked participants to solve a cognitive task, provided them with different goals, maximizing speed (SPD) or accuracy (ACC) respectively, and measured how frequently (Experiment 1) and how proficiently (Experiment 2) they made use of a novel external resource to support their cognitive processing. Experiment 1 showed that offloading behavior varied with goals: participants offloaded less in the SPD than in the ACC condition. Experiment 2 showed that this differential offloading behavior was associated with high goal-related performance: fast answers in the SPD condition, accurate answers in the ACC condition. Simultaneously, goal-unrelated performance was sacrificed: inaccurate answers in the SPD condition, slow answers in the ACC condition. The findings support the notion of humans as canny offloaders who are able to successfully incorporate their environment in pursuit of their current cognitive goals. Future efforts should focus on the findings' generalizability, for example, to settings without feedback or with high mental workload.


Subject(s)
Cognition , Goals , Motivation , Problem Solving , Task Performance and Analysis , Female , Humans , Male , Young Adult
16.
Philos Trans R Soc Lond B Biol Sci ; 374(1771): 20180430, 2019 04 29.
Article in English | MEDLINE | ID: mdl-30852996

ABSTRACT

In social interactions, we rely on non-verbal cues like gaze direction to understand the behaviour of others. How we react to these cues is determined by the degree to which we believe that they originate from an entity with a mind capable of having internal states and showing intentional behaviour, a process called mind perception. While prior work has established a set of neural regions linked to mind perception, research has just begun to examine how mind perception affects social-cognitive mechanisms like gaze processing on a neuronal level. In the current experiment, participants performed a social attention task (i.e. attentional orienting to gaze cues) with either a human or a robot agent (i.e. manipulation of mind perception) while transcranial direct current stimulation (tDCS) was applied to prefrontal and temporo-parietal brain areas. The results show that temporo-parietal stimulation did not modulate mechanisms of social attention, neither in response to the human nor in response to the robot agent, whereas prefrontal stimulation enhanced attentional orienting in response to human gaze cues and attenuated attentional orienting in response to robot gaze cues. The findings suggest that mind perception modulates low-level mechanisms of social cognition via prefrontal structures, and that a certain degree of mind perception is essential in order for prefrontal stimulation to affect mechanisms of social attention. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.


Subject(s)
Attention/physiology , Fixation, Ocular/physiology , Interpersonal Relations , Prefrontal Cortex/physiology , Robotics , Transcranial Direct Current Stimulation , Adult , Cues , Female , Humans , Male , Orientation/physiology , Virginia , Young Adult
17.
J Exp Psychol Appl ; 25(3): 386-395, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30702316

ABSTRACT

As nonhuman agents are integrated into the workforce, the question becomes to what extent advice seeking in technology-infused environments depends on the perceived fit between agent and task, and whether humans are willing to consider advice from nonhuman agents. In this experiment, participants sought advice from human, robot, or computer agents when performing a social or analytical task, with the task being either known or unknown when selecting an agent. In the agent-1st condition, participants first chose an adviser and then received their task assignment; in the task-1st condition, participants first received the task assignment and then chose an adviser. In the agent-1st condition, we expected participants to prefer human to nonhuman advisers and to subsequently comply more with their advice when they were assigned the social as opposed to the analytical task. In the task-1st condition, we expected advice seeking and compliance to be guided by stereotypical assumptions regarding an agent's task expertise. The findings indicate that the human was chosen more often than the nonhuman agents in the agent-1st condition, whereas adviser choices were calibrated based on perceived agent-task fit in the task-1st condition. Compliance rates were not generally calibrated based on agent-task fit. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Artificial Intelligence , Decision Making , Robotics , Humans , Social Perception
18.
J Exp Psychol Appl ; 25(1): 25-40, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30265050

ABSTRACT

Knowing the internal states of others is essential to predicting behavior in social interactions and requires that the general characteristic of "having a mind" is granted to our interaction partners. Mind perception is a highly automatic process and can potentially cause a cognitive conflict when interacting with agents whose mind status is ambiguous, such as artificial agents. We investigate whether mind perception negatively impacts performance on tasks involving artificial agents because of cognitive conflict processing caused by a potentially increased difficulty in categorizing them as human versus nonhuman. Experiment 1 shows that an ambiguous humanoid stimulus negatively impacts performance on a vigilance task that is known to be sensitive to the drainage of cognitive resources. This negative effect on performance vanishes when participants are preexposed to the stimulus before the vigilance task (Experiments 2 and 3). The effect of preexposure on performance recovery is independent of whether participants resolve the cognitive conflict explicitly by answering mind-related questions (Experiment 2) or implicitly by judging the stimuli on a set of physical features (Experiment 3). Together, the findings suggest that mind perception is so automatic that it cannot be suppressed even if it has negative effects on cognitive performance. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Cognition , Conflict, Psychological , Social Perception , Visual Perception , Adult , Female , Humans , Male , Young Adult
19.
Hum Factors ; 61(2): 243-254, 2019 03.
Article in English | MEDLINE | ID: mdl-30169972

ABSTRACT

OBJECTIVE: A distributed cognitive system is a system in which cognitive processes are distributed between brain-based internal and environment-based external resources. In the current experiment, we examined the influence of metacognitive processes on external resource use (i.e., cognitive offloading) in such systems. BACKGROUND: High-tech working environments oftentimes represent distributed cognitive systems. Because cognitive offloading can both support and harm performance, depending on the specific circumstances, it is essential to understand when and why people offload their cognition. METHOD: We used an extension of the mental rotation paradigm. It allowed participants to rotate stimuli either internally as in the original paradigm or with a rotation knob that afforded rotating stimuli externally on a computer screen. Two parameters were manipulated: the knob's actual reliability (AR) and an instruction altering participants' beliefs about the knob's reliability (believed reliability; BR). We measured cognitive offloading proportion and perceived knob utility. RESULTS: Participants were able to quickly and dynamically adjust their cognitive offloading proportion and subjective utility assessments in response to AR, suggesting a high level of offloading proficiency. However, when BR instructions were presented that falsely described the knob's reliability to be lower than it actually was, participants reduced cognitive offloading substantially. CONCLUSION: The extent to which people offload their cognition is not based solely on utility maximization; it is additionally affected by possibly erroneous preexisting beliefs. APPLICATION: To support users in efficiently operating in a distributed cognitive system, an external resource's utility should be made transparent, and preexisting beliefs should be adjusted prior to interaction.
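The dissociation between actual reliability (AR) and believed reliability (BR) can be illustrated with a toy decision model in which the choice to offload depends only on believed reliability, while the resulting performance depends on actual reliability. All parameters below are invented; this is not the authors' model.

```python
# Toy model of the AR/BR dissociation (invented parameters): the decision
# to offload uses BELIEVED reliability, while performance depends on the
# knob's ACTUAL reliability.
def offload_probability(believed_reliability: float,
                        internal_success: float = 0.75) -> float:
    """Softly prefer the option with the higher believed success rate."""
    edge = believed_reliability - internal_success
    return max(0.0, min(1.0, 0.5 + 2.0 * edge))  # clipped linear rule

actual = 0.95                  # knob is in fact highly reliable
for believed in (0.95, 0.60):  # accurate vs. falsely lowered BR instruction
    p = offload_probability(believed)
    success = p * actual + (1 - p) * 0.75
    print(f"BR={believed:.2f}: offload on {p:.0%} of trials, "
          f"expected accuracy {success:.2f}")
```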


Subject(s)
Imagination/physiology , Metacognition/physiology , Psychomotor Performance/physiology , Space Perception/physiology , User-Computer Interface , Visual Perception/physiology , Adult , Humans
20.
Hum Factors ; 60(8): 1207-1218, 2018 12.
Article in English | MEDLINE | ID: mdl-30004798

ABSTRACT

OBJECTIVE: The authors investigate whether nonhuman agents, such as computers or robots, produce a social conformity effect in human operators and examine to what extent potential conformist behavior varies as a function of the human-likeness of the group members and the type of task that has to be performed. BACKGROUND: People conform due to normative and/or informational motivations in human-human interactions, and conformist behavior is modulated by factors related to the individual as well as factors associated with the group, context, and culture. Studies have yet to examine whether nonhuman agents also induce social conformity. METHOD: Participants were assigned to a computer, robot, or human group and completed both a social and an analytical task with the respective group. RESULTS: Conformity measures (the percentage of times participants answered in line with the agents on critical trials) subjected to a 3 × 2 mixed ANOVA showed significantly higher conformity rates for the analytical versus the social task, as well as a modulation of conformity depending on the perceived agent-task fit. CONCLUSION: Findings indicate that nonhuman agents were able to exert a social conformity effect, which was modulated further by the perceived match between agent and task type. Participants conformed to comparable degrees with all agent types during the analytical task but conformed significantly more strongly on the social task as the group's human-likeness increased. APPLICATION: Results suggest that users may react differently to the influence of nonhuman agent groups, with the potential for variability in conformity depending on the domain of the task.
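A 3 (agent group, between-subjects) × 2 (task type, within-subjects) mixed ANOVA of this kind could be run as sketched below, here using the pingouin package on invented data. This is not the authors' analysis script, and the effects built into the simulation merely mimic the reported pattern.

```python
# Sketch of the 3 (agent group: between) x 2 (task: within) mixed ANOVA on
# conformity rates, using pingouin (invented data; not the authors' script).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for pid in range(60):
    agent = ["computer", "robot", "human"][pid % 3]
    for task in ("analytical", "social"):
        base = 0.35 if task == "analytical" else 0.15
        # On the social task, conformity rises with group human-likeness.
        if task == "social":
            base += {"computer": 0.0, "robot": 0.05, "human": 0.15}[agent]
        rows.append({"pid": pid, "agent": agent, "task": task,
                     "conformity": base + rng.normal(0, 0.05)})

df = pd.DataFrame(rows)
print(pg.mixed_anova(data=df, dv="conformity", within="task",
                     subject="pid", between="agent"))
```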


Subject(s)
Automation , Decision Making , Man-Machine Systems , Social Conformity , Adult , Humans , Robotics